Conversation
📝 Walkthrough
This PR introduces OpenAI Responses API support alongside the existing chat-completions API. New conversion functions transform Anthropic message/tool formats to OpenAI Responses structures, including JSON Schema normalization. Wire API selection is dynamic via the OPENAI_WIRE_API environment variable.
Changes
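The JSON Schema normalization mentioned in the walkthrough (`const` rewritten as `enum`, per the diagram below) can be sketched roughly as follows. This is an illustrative sketch only; the function name and exact rewrite rules are assumptions, not the PR's actual implementation in responses.ts.

```typescript
// Hypothetical sketch: recursively rewrite `const: X` as `enum: [X]`,
// a common workaround for endpoints that reject `const` in tool schemas.
type JsonSchema = Record<string, unknown>

function normalizeConstToEnum(schema: unknown): unknown {
  if (Array.isArray(schema)) return schema.map(normalizeConstToEnum)
  if (!schema || typeof schema !== 'object') return schema
  const out: JsonSchema = {}
  for (const [key, value] of Object.entries(schema as JsonSchema)) {
    if (key === 'const') {
      // Represent the single allowed value as a one-element enum.
      out.enum = [value]
    } else {
      out[key] = normalizeConstToEnum(value)
    }
  }
  return out
}
```

For example, `{ type: 'string', const: 'celsius' }` would become `{ type: 'string', enum: ['celsius'] }`, preserving the constraint in a form more widely accepted by JSON Schema consumers.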
Sequence Diagram(s)
sequenceDiagram
participant Client
participant Router as OpenAI Router<br/>(index.ts)
participant Resolver as Wire API<br/>Resolver
participant ReqBuilder as Request<br/>Builder
participant OpenAI as OpenAI API
participant Adapter as Stream<br/>Adapter
Client->>Router: Call OpenAI with messages/tools
Router->>Resolver: resolveOpenAIWireAPI()
alt wire_api = 'responses'
Resolver-->>Router: 'responses'
Router->>ReqBuilder: buildOpenAIResponsesRequestBody()
ReqBuilder->>ReqBuilder: Convert Anthropic format to Responses
ReqBuilder->>ReqBuilder: Normalize JSON Schema (const→enum)
ReqBuilder-->>Router: ResponseCreateParamsStreaming
Router->>OpenAI: client.responses.create(stream)
OpenAI-->>Adapter: ResponseStreamEvent[]
Adapter->>Adapter: Convert Responses events to Anthropic
Adapter->>Adapter: Track tokens, manage tool-use lifecycle
Adapter-->>Router: BetaRawMessageStreamEvent[]
else wire_api = 'chat_completions'
Resolver-->>Router: 'chat_completions'
Router->>ReqBuilder: buildOpenAIRequestBody()
ReqBuilder-->>Router: ChatCompletionCreateParamsStreaming
Router->>OpenAI: client.chat.completions.create(stream)
OpenAI-->>Adapter: ChatCompletionChunk[]
Adapter->>Adapter: Convert chat completion events to Anthropic
Adapter-->>Router: BetaRawMessageStreamEvent[]
end
Router-->>Client: Adapted stream
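The adapter step in the diagram (Responses stream events converted to Anthropic stream events) can be sketched as below. The event shapes and field names here are simplified assumptions for illustration, not the real OpenAI or Anthropic SDK types used by the PR.

```typescript
// Simplified sketch of the stream-adapter step from the diagram:
// map a couple of Responses-style events onto Anthropic-style events.
type ResponsesEvent =
  | { type: 'response.output_text.delta'; delta: string }
  | { type: 'response.completed'; usage: { output_tokens: number } }

type AnthropicEvent =
  | { type: 'content_block_delta'; index: number; delta: { type: 'text_delta'; text: string } }
  | { type: 'message_delta'; usage: { output_tokens: number } }

async function* adaptSketch(
  events: AsyncIterable<ResponsesEvent>,
): AsyncGenerator<AnthropicEvent, void> {
  for await (const event of events) {
    switch (event.type) {
      case 'response.output_text.delta':
        // Text deltas become content_block_delta events on a block index.
        yield { type: 'content_block_delta', index: 0, delta: { type: 'text_delta', text: event.delta } }
        break
      case 'response.completed':
        // Terminal event carries the output token count forward.
        yield { type: 'message_delta', usage: { output_tokens: event.usage.output_tokens } }
        break
    }
  }
}
```

The real adapter additionally tracks block indices and the tool-use lifecycle, as the diagram notes; this sketch only shows the shape of the event translation.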
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 2
🧹 Nitpick comments (4)
src/services/api/openai/index.ts (1)
287-287: Tighten `adaptedStream` typing. Both adapters return `AsyncGenerator<BetaRawMessageStreamEvent, void>`, so `AsyncIterable<BetaRawMessageStreamEvent>` is precise and avoids `any`, per the project's no-`any` guidance.
♻️ Proposed change
- let adaptedStream: AsyncIterable<any>
+ let adaptedStream: AsyncIterable<BetaRawMessageStreamEvent>
As per coding guidelines: "Prohibit `as any` type assertions in production code … Use `Record<string, unknown>` instead of `any` for objects with unknown structure."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/api/openai/index.ts` at line 287, the variable `adaptedStream` is currently typed as `AsyncIterable<any>`; change it to the precise type `AsyncIterable<BetaRawMessageStreamEvent>` because both adapters return `AsyncGenerator<BetaRawMessageStreamEvent, void>`; update the declaration of `adaptedStream` and add/import the `BetaRawMessageStreamEvent` type where necessary so no `any` is used.
src/services/api/openai/responses.ts (3)
19-28: Nit: the `'response'` (singular) alias for `OPENAI_WIRE_API` is undocumented. Accepting both `'responses'` and `'response'` is friendly, but the docstring on `modelType` (settings/types.ts:376) and the test (responses.test.ts:42-45) only mention `'responses'`. Either drop the singular alias or document/test it so it doesn't quietly bit-rot.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/api/openai/responses.ts` around lines 19 - 28, the function `resolveOpenAIWireAPI` currently accepts the undocumented singular alias `'response'` in addition to `'responses'`, which conflicts with the docs/tests; either remove the singular branch from `resolveOpenAIWireAPI` (delete the `raw === 'response'` check so only `'responses'` maps to `'responses'`) or, if you want to keep the alias, update the `modelType` docstring in settings/types.ts and add/adjust the test in responses.test.ts to assert that `OPENAI_WIRE_API='response'` also resolves to `'responses'` so behavior and documentation/tests stay consistent with `resolveOpenAIWireAPI`.
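For reference, the resolver behavior under discussion amounts to something like the sketch below. The function and type names are assumptions drawn from the review text, not a copy of the actual `resolveOpenAIWireAPI` implementation.

```typescript
// Sketch of the alias behavior the review describes: both 'responses'
// and the singular 'response' select the Responses wire API; anything
// else falls back to chat completions.
type WireAPI = 'responses' | 'chat_completions'

function resolveOpenAIWireAPISketch(raw: string | undefined): WireAPI {
  const value = raw?.trim().toLowerCase()
  if (value === 'responses' || value === 'response') return 'responses'
  return 'chat_completions'
}
```

Making the alias explicit like this (and asserting it in a test) is what keeps it from silently bit-rotting.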
202-211: Minor inconsistency: `closeAllToolBlocks` doesn't clear its map.
`closeAllTextBlocks` calls `openTextBlocks.clear()` after closing; `closeAllToolBlocks` doesn't clear `toolBlocks`. It's currently only invoked from `emitTerminal` (which is guarded by `terminalSeen`), so behavior is fine — but the asymmetry is a foot-gun if this is ever called from a non-terminal path, and `toolBlocks.size > 0` is also used at Line 446 to decide between `'tool_use'` and `'end_turn'`.
♻️ Proposed change
  const closeAllToolBlocks = async function* () {
    for (const [, block] of toolBlocks) {
      if (!openBlockIndices.has(block.contentIndex)) continue
      yield {
        type: 'content_block_stop',
        index: block.contentIndex,
      } as BetaRawMessageStreamEvent
      openBlockIndices.delete(block.contentIndex)
    }
+   // Note: do NOT clear toolBlocks here — its size is read by the
+   // terminal handler to choose 'tool_use' vs 'end_turn' stop_reason.
  }
(Or clear the map and snapshot `hadToolBlocks` before the close, depending on intent. Either way, document it.)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/api/openai/responses.ts` around lines 202 - 211, The closeAllToolBlocks generator currently closes open tool blocks but does not clear the toolBlocks map, creating asymmetry with closeAllTextBlocks (which calls openTextBlocks.clear()); update closeAllToolBlocks to either clear toolBlocks after yielding closes or snapshot and use a hadToolBlocks boolean before clearing so callers can still inspect prior size (refer to closeAllToolBlocks, toolBlocks, openBlockIndices, openTextBlocks.clear(), emitTerminal and terminalSeen); ensure the chosen approach is documented in a comment and preserves existing behavior when called from emitTerminal while preventing stale entries if invoked from other paths.
480-501: Replace `as any` with `Record<string, unknown>` in block iteration. These are production-code casts; the project guideline calls them out specifically:
"Prohibit `as any` type assertions in production code … Use `Record<string, unknown>` instead of `any` for objects with unknown structure"
"When accessing properties like `msg.request`, cast as `const req = msg.request as Record<string, unknown>` for type safety"
♻️ Proposed change (apply at both Line 501 and Line 577)
- const block = rawBlock as any
+ const block = rawBlock as Record<string, unknown>
Field accesses (`block.type`, `block.text`, `block.tool_use_id`, `block.content`, `block.id`, `block.name`, `block.input`) will then need narrowing (e.g. `typeof block.text === 'string'`) or a small typed discriminator helper, but that's the spirit of the guideline.
As per coding guidelines: "Prohibit `as any` casts in production code; tests may use `as any` for mock data."
Also applies to: 546-577
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/api/openai/responses.ts` around lines 480 - 501, in convertUserMessageToResponses replace the two occurrences of the unsafe cast "as any" used when iterating message blocks with "as Record<string, unknown>" and then narrow property access before use (e.g., check typeof block.type === 'string', typeof block.text === 'string', Array.isArray(block.content), etc.); ensure you update all referenced properties (block.type, block.text, block.tool_use_id, block.content, block.id, block.name, block.input) to be type-checked or guarded before reading so the function remains type-safe and follows the project guideline prohibiting "as any".
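The narrowing pattern the guideline asks for can be sketched as below. The block shapes are assumptions based on the field names cited in the review (`block.type`, `block.text`, `block.tool_use_id`), not the actual Anthropic SDK types.

```typescript
// Sketch: cast unknown input to Record<string, unknown> once, then
// guard each field before use instead of reaching through `as any`.
function describeBlock(rawBlock: unknown): string {
  if (!rawBlock || typeof rawBlock !== 'object') return 'unknown'
  const block = rawBlock as Record<string, unknown>
  if (block.type === 'text' && typeof block.text === 'string') {
    return `text(${block.text.length} chars)`
  }
  if (block.type === 'tool_result' && typeof block.tool_use_id === 'string') {
    return `tool_result(${block.tool_use_id})`
  }
  return typeof block.type === 'string' ? block.type : 'unknown'
}
```

Each field is checked before it is read, so malformed blocks degrade to a safe fallback instead of a runtime error hidden behind `any`.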
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/services/api/openai/index.ts`:
- Around line 286-300: The Responses branch is sending messagesForAPI and thus
omitting the deferred-tools announcement that ToolSearchTool needs; update the
responses path to use the same messagesWithDeferredToolList that the
chat-completions branch uses (i.e., pass messagesWithDeferredToolList into
buildOpenAIResponsesRequestBody) so prependDeferredToolListIfNeeded is honored;
verify usage points including buildOpenAIResponsesRequestBody,
client.responses.create, and adaptResponsesStreamToAnthropic to ensure the
message variable change is propagated.
In `@src/services/api/openai/responses.ts`:
- Around line 84-105: The function anthropicToolChoiceToResponses currently
lacks handling for the 'none' tool choice so callers intending "do not call
tools" get treated as undefined; add a case in the switch (in
anthropicToolChoiceToResponses) for type === 'none' and return the explicit
'none' ToolChoiceOptions value (similar to existing 'auto'/'any' cases), keeping
other behavior the same so 'tool' still returns a ToolChoiceFunction object with
name from tc.name.
---
Nitpick comments:
In `@src/services/api/openai/index.ts`:
- Line 287: The variable adaptedStream is currently typed as AsyncIterable<any>;
change it to the precise type AsyncIterable<BetaRawMessageStreamEvent> because
both adapters return AsyncGenerator<BetaRawMessageStreamEvent, void>; update the
declaration of adaptedStream to use AsyncIterable<BetaRawMessageStreamEvent> and
add/import the BetaRawMessageStreamEvent type where necessary so no `any` is
used (reference: adaptedStream).
In `@src/services/api/openai/responses.ts`:
- Around line 19-28: The function resolveOpenAIWireAPI currently accepts the
undocumented singular alias 'response' in addition to 'responses', which
conflicts with the docs/tests; either remove the singular branch from
resolveOpenAIWireAPI (delete the raw === 'response' check so only 'responses'
maps to 'responses') or, if you want to keep the alias, update the modelType
docstring in settings/types.ts and add/adjust the test in responses.test.ts to
assert that OPENAI_WIRE_API='response' also resolves to 'responses' so behavior
and documentation/tests stay consistent with resolveOpenAIWireAPI.
- Around line 202-211: The closeAllToolBlocks generator currently closes open
tool blocks but does not clear the toolBlocks map, creating asymmetry with
closeAllTextBlocks (which calls openTextBlocks.clear()); update
closeAllToolBlocks to either clear toolBlocks after yielding closes or snapshot
and use a hadToolBlocks boolean before clearing so callers can still inspect
prior size (refer to closeAllToolBlocks, toolBlocks, openBlockIndices,
openTextBlocks.clear(), emitTerminal and terminalSeen); ensure the chosen
approach is documented in a comment and preserves existing behavior when called
from emitTerminal while preventing stale entries if invoked from other paths.
- Around line 480-501: In convertUserMessageToResponses replace the two
occurrences of the unsafe cast "as any" used when iterating message blocks with
"as Record<string, unknown>" and then narrow property access before use (e.g.,
check typeof block.type === 'string', typeof block.text === 'string',
Array.isArray(block.content), etc.); ensure you update all referenced
properties (block.type, block.text, block.tool_use_id, block.content, block.id,
block.name, block.input) to be type-checked or guarded before reading so the
function remains type-safe and follows the project guideline prohibiting "as
any".
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 19b91be6-2315-4fa3-bc2f-a5eb89ae4b20
📒 Files selected for processing (5)
src/services/api/openai/__tests__/responses.test.ts
src/services/api/openai/index.ts
src/services/api/openai/responses.ts
src/utils/managedEnvConstants.ts
src/utils/settings/types.ts
// 12. Call OpenAI API with streaming
const requestBody = buildOpenAIRequestBody({
  model: openaiModel,
  messages: openaiMessages,
  tools: openaiTools,
  toolChoice: openaiToolChoice,
  enableThinking,
  maxTokens,
  temperatureOverride: options.temperatureOverride,
})
const stream = await client.chat.completions.create(
  requestBody,
  { signal },
)
let adaptedStream: AsyncIterable<any>
if (wireAPI === 'responses') {
  const requestBody = buildOpenAIResponsesRequestBody({
    model: openaiModel,
    messages: messagesForAPI,
    systemPrompt,
    tools: standardTools,
    toolChoice: options.toolChoice,
    enableThinking,
    maxTokens,
    temperatureOverride: options.temperatureOverride,
  })
  const stream = await client.responses.create(requestBody, { signal })
  adaptedStream = adaptResponsesStreamToAnthropic(stream, openaiModel)
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Confirm the return type of normalizeMessagesForAPI to validate the type-mismatch claim.
fd -t f 'messages.ts' src/utils | xargs rg -n -A6 'export function normalizeMessagesForAPI'
Repository: claude-code-best/claude-code
Length of output: 375
🏁 Script executed:
#!/bin/bash
# Read the file to see actual types and variable assignments
sed -n '160,170p' src/services/api/openai/index.ts | cat -n
sed -n '220,240p' src/services/api/openai/index.ts | cat -n
sed -n '286,310p' src/services/api/openai/index.ts | cat -n
Repository: claude-code-best/claude-code
Length of output: 2492
🏁 Script executed:
#!/bin/bash
# Find buildOpenAIResponsesRequestBody definition to see what type it expects
rg -n 'export function buildOpenAIResponsesRequestBody|function buildOpenAIResponsesRequestBody' src/services/api/openai/ -A 10
Repository: claude-code-best/claude-code
Length of output: 843
🏁 Script executed:
#!/bin/bash
# Check what buildOpenAIResponsesRequestBody does with messages internally
sed -n '107,150p' src/services/api/openai/responses.ts | cat -n
Repository: claude-code-best/claude-code
Length of output: 1698
🏁 Script executed:
#!/bin/bash
# Check prependDeferredToolListIfNeeded signature to validate the proposed fix
rg -n 'export function prependDeferredToolListIfNeeded|function prependDeferredToolListIfNeeded' src/ -A 5
Repository: claude-code-best/claude-code
Length of output: 494
Responses branch omits the deferred-tools announcement, breaking tool search.
The responses branch passes messagesForAPI (line 291) while the chat-completions branch uses messagesWithDeferredToolList (derived from line 226-231). This means the responses path skips the <available-deferred-tools>…</available-deferred-tools> user message that ToolSearchTool relies on (per prependDeferredToolListIfNeeded's docstring — OpenAI-compatible endpoints can't consume Anthropic's defer_loading/tool_reference payloads). Tool search will be broken for users on the Responses wire API.
Both branches accept the same message type, so the fix is straightforward:
🐛 Proposed fix
if (wireAPI === 'responses') {
const requestBody = buildOpenAIResponsesRequestBody({
model: openaiModel,
- messages: messagesForAPI,
+ messages: messagesWithDeferredToolList,
systemPrompt,
tools: standardTools,
toolChoice: options.toolChoice,
enableThinking,
maxTokens,
temperatureOverride: options.temperatureOverride,
})Worth adding a tool-search test case for the responses path to lock this in.
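For context, what `prependDeferredToolListIfNeeded` is described as doing (per its docstring as quoted in this review) can be sketched like this. The function name is real, but the signature, message shape, and tag format below are assumptions for illustration, not the actual implementation.

```typescript
// Hypothetical sketch: OpenAI-compatible endpoints cannot consume
// Anthropic's defer_loading/tool_reference payloads, so deferred tools
// are announced in a synthetic user message instead.
type Message = { role: 'user' | 'assistant'; content: string }

function prependDeferredToolListSketch(
  messages: Message[],
  deferredToolNames: string[],
): Message[] {
  if (deferredToolNames.length === 0) return messages // nothing to announce
  const announcement: Message = {
    role: 'user',
    content: `<available-deferred-tools>${deferredToolNames.join(', ')}</available-deferred-tools>`,
  }
  return [announcement, ...messages]
}
```

The bug above is that the Responses branch bypasses this step entirely, so the announcement never reaches the model on that wire API.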
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// 12. Call OpenAI API with streaming
let adaptedStream: AsyncIterable<any>
if (wireAPI === 'responses') {
  const requestBody = buildOpenAIResponsesRequestBody({
    model: openaiModel,
    messages: messagesWithDeferredToolList,
    systemPrompt,
    tools: standardTools,
    toolChoice: options.toolChoice,
    enableThinking,
    maxTokens,
    temperatureOverride: options.temperatureOverride,
  })
  const stream = await client.responses.create(requestBody, { signal })
  adaptedStream = adaptResponsesStreamToAnthropic(stream, openaiModel)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/api/openai/index.ts` around lines 286 - 300, The Responses
branch is sending messagesForAPI and thus omitting the deferred-tools
announcement that ToolSearchTool needs; update the responses path to use the
same messagesWithDeferredToolList that the chat-completions branch uses (i.e.,
pass messagesWithDeferredToolList into buildOpenAIResponsesRequestBody) so
prependDeferredToolListIfNeeded is honored; verify usage points including
buildOpenAIResponsesRequestBody, client.responses.create, and
adaptResponsesStreamToAnthropic to ensure the message variable change is
propagated.
export function anthropicToolChoiceToResponses(
  toolChoice: unknown,
): ToolChoiceOptions | ToolChoiceFunction | undefined {
  if (!toolChoice || typeof toolChoice !== 'object') return undefined

  const tc = toolChoice as Record<string, unknown>
  const type = tc.type as string

  switch (type) {
    case 'auto':
      return 'auto'
    case 'any':
      return 'required'
    case 'tool':
      return {
        type: 'function',
        name: tc.name as string,
      }
    default:
      return undefined
  }
}
🧩 Analysis chain
🌐 Web query:
OpenAI Responses API tool_choice values for ToolChoiceOptions
💡 Result:
In the OpenAI Responses API, the tool_choice parameter accepts values of type ToolChoiceOptions, which are the string literals: "none", "auto", or "required". - "none": The model will not call any tool and instead generates a message. - "auto": The model can pick between generating a message or calling one or more tools (default behavior). - "required": The model must call one or more tools. Note that tool_choice can also accept other union types beyond ToolChoiceOptions, such as ToolChoiceAllowed (for restricting to specific tools), ToolChoiceTypes (e.g., {"type": "file_search"}), ToolChoiceFunction (e.g., {"type": "function", "name": "get_weather"}), ToolChoiceMcp, ToolChoiceCustom, ToolChoiceApplyPatch, and ToolChoiceShell. However, ToolChoiceOptions specifically refers to the basic string options controlling whether tools are optional, automatic, or mandatory. The Responses API extends the Chat Completions API with additional tool types (e.g., MCP, built-in tools like computer_use), but the ToolChoiceOptions values remain consistent.
Citations:
- 1: https://developers.openai.com/api/docs/apit-reference/responses
- 2: https://javadoc.io/static/com.openai/openai-java/4.0.0/com/openai/models/responses/Response.ToolChoice.html
- 3: https://developers.openai.com/api/docs/guides/function-calling
🏁 Script executed:
# First, let's verify the file exists and check the problematic lines
wc -l src/services/api/openai/responses.ts
Repository: claude-code-best/claude-code
Length of output: 113
🏁 Script executed:
# Read the function and surrounding code (lines 84-105)
sed -n '84,105p' src/services/api/openai/responses.ts
Repository: claude-code-best/claude-code
Length of output: 591
🏁 Script executed:
# Check lines 501 and 577 for the `as any` casts
sed -n '500,502p' src/services/api/openai/responses.ts
sed -n '576,578p' src/services/api/openai/responses.ts
Repository: claude-code-best/claude-code
Length of output: 312
🏁 Script executed:
# Search for callers of anthropicToolChoiceToResponses to see if 'none' is ever passed
rg -n 'anthropicToolChoiceToResponses' src/
Repository: claude-code-best/claude-code
Length of output: 274
🏁 Script executed:
# Check the context of where anthropicToolChoiceToResponses is called (line 129)
sed -n '107,145p' src/services/api/openai/responses.ts
Repository: claude-code-best/claude-code
Length of output: 1224
🏁 Script executed:
# Search for any references to 'none' in tool_choice context or Anthropic API patterns
rg -n "tool_choice.*none|'none'|\"none\"" src/services/api/
Repository: claude-code-best/claude-code
Length of output: 700
🏁 Script executed:
# Search for Anthropic API imports or type definitions related to tool_choice
rg -n "ToolChoice|tool_choice" src/services/api/openai/responses.ts -A 2 -B 2
Repository: claude-code-best/claude-code
Length of output: 1037
🏁 Script executed:
# Check if there's any documentation or comments about Anthropic's tool_choice format
rg -n "Anthropic" src/services/api/openai/responses.ts | head -20
Repository: claude-code-best/claude-code
Length of output: 132
🏁 Script executed:
# Search for Anthropic API type definitions or documentation about tool_choice
fd -e ts -e tsx | xargs grep -l "tool_choice" | head -10
Repository: claude-code-best/claude-code
Length of output: 479
🏁 Script executed:
# Check if there's any test or example code showing Anthropic tool_choice usage
rg -n "type.*none|tool_choice.*=.*none" src/ -A 2 -B 2
Repository: claude-code-best/claude-code
Length of output: 5527
🏁 Script executed:
# Look at the actual Anthropic types being used in the codebase
rg -n "BetaToolChoiceUnion|ToolChoice" src/services/api/ -A 3
Repository: claude-code-best/claude-code
Length of output: 5636
🏁 Script executed:
# Let's check what the test file shows about expected tool_choice values
cat src/services/api/openai/__tests__/responses.test.ts | head -100
Repository: claude-code-best/claude-code
Length of output: 2552
🏁 Script executed:
# Search for any documentation or usage of tool_choice with 'none' in Anthropic context
rg -n "type.*:.*none\|'none'" src/services/api/ -B 3 -A 3
Repository: claude-code-best/claude-code
Length of output: 54
🏁 Script executed:
# Check the Anthropic SDK imports to understand BetaToolChoice types
rg -n "from.*@anthropic" src/services/api/openai/responses.ts -A 5
Repository: claude-code-best/claude-code
Length of output: 434
🏁 Script executed:
# Let's look at line 501 and 577 more carefully with full context
sed -n '495,510p' src/services/api/openai/responses.ts
sed -n '571,585p' src/services/api/openai/responses.ts
Repository: claude-code-best/claude-code
Length of output: 935
🏁 Script executed:
# Search for what Anthropic's tool_choice types are imported as
rg -n "BetaToolChoice" src/ -A 2
Repository: claude-code-best/claude-code
Length of output: 516
🏁 Script executed:
# Check if there's any documentation or comment about Anthropic's tool_choice including 'none'
rg -n "tool_choice.*{.*type" src/services/api/ -B 2 -A 2
Repository: claude-code-best/claude-code
Length of output: 189
🏁 Script executed:
# Look for any existing handling of 'none' in other adapters (Gemini, Grok)
rg -n "anthropicToolChoiceTo" src/services/api/
Repository: claude-code-best/claude-code
Length of output: 1473
🏁 Script executed:
# Check what the Anthropic SDK types actually support
grep -r "BetaToolChoice" node_modules/@anthropic-ai/sdk 2>/dev/null | grep -i "type.*none" | head -5
Repository: claude-code-best/claude-code
Length of output: 54
🏁 Script executed:
# Let's verify the exact context around the as any casts to understand their severity
sed -n '480,520p' src/services/api/openai/responses.ts
Repository: claude-code-best/claude-code
Length of output: 1240
🏁 Script executed:
# Check if there are type guards or proper type checking before using block properties
sed -n '500,530p' src/services/api/openai/responses.ts
Repository: claude-code-best/claude-code
Length of output: 920
🏁 Script executed:
# Let's verify one more time that the tool_choice 'none' fix would be correct
# by checking if there are any other callers or test cases we should be aware of
rg -n "toolChoice" src/services/api/openai/ -B 2 -A 2
Repository: claude-code-best/claude-code
Length of output: 5060
🏁 Script executed:
# Check if buildOpenAIResponsesRequestBody is properly tested with different tool_choice values
grep -n "toolChoice" src/services/api/openai/__tests__/responses.test.ts
Repository: claude-code-best/claude-code
Length of output: 126
Add handler for tool_choice: 'none' case.
The anthropicToolChoiceToResponses function doesn't handle the 'none' type, which is a valid value in OpenAI Responses API's ToolChoiceOptions. When an explicit "do not call tools" choice is passed, it currently returns undefined and gets filtered out, silently defaulting to OpenAI's 'auto' behavior instead of respecting the caller's intent.
Fix
switch (type) {
case 'auto':
return 'auto'
case 'any':
return 'required'
+ case 'none':
+ return 'none'
case 'tool':
return {
type: 'function',
name: tc.name as string,
}
default:
return undefined
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/api/openai/responses.ts` around lines 84 - 105, The function
anthropicToolChoiceToResponses currently lacks handling for the 'none' tool
choice so callers intending "do not call tools" get treated as undefined; add a
case in the switch (in anthropicToolChoiceToResponses) for type === 'none' and
return the explicit 'none' ToolChoiceOptions value (similar to existing
'auto'/'any' cases), keeping other behavior the same so 'tool' still returns a
ToolChoiceFunction object with name from tc.name.
Please share the test results.
#349
Support switching the openai provider to the Responses API via OPENAI_WIRE_API
Summary by CodeRabbit
New Features
`OPENAI_WIRE_API` environment variable to switch between API endpoints.
Documentation